
    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    Full text link
    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results which are largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process which is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results which are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.
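
    The two-stage idea above (build an offset field, then filter at the displaced locations) can be pictured with a short sketch. This is not the authors' implementation: the offset rule (stepping downhill on an edge-strength map), the parameter names, and the choice of a uniform filter are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, uniform_filter

def offset_then_filter(img, offset_scale=2.0, sigma=1.5):
    """Sketch of offset-field filtering: displace each pixel's sampling
    point away from nearby boundaries, then apply an ordinary filter.
    The offset rule below (descend the edge-strength map) is an
    illustrative choice, not the paper's exact construction."""
    # Edge-strength map from a lightly smoothed image
    smooth = gaussian_filter(img.astype(float), sigma)
    edge = np.hypot(sobel(smooth, axis=0), sobel(smooth, axis=1))

    # Offsets point downhill on the edge map, i.e. away from boundaries
    gy, gx = sobel(edge, axis=0), sobel(edge, axis=1)
    norm = np.hypot(gy, gx) + 1e-8
    dy, dx = -offset_scale * gy / norm, -offset_scale * gx / norm

    # The offset step is just a pixel permutation / look-up operation
    yy, xx = np.indices(img.shape)
    ys = np.clip(np.round(yy + dy).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, img.shape[1] - 1)
    displaced = img[ys, xs].astype(float)

    # Conventional filtering, applied unmodified after the permutation
    return uniform_filter(displaced, size=3)
```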

    Real-Time Anisotropic Diffusion using Space-Variant Vision

    Full text link
    Many computer and robot vision applications require multi-scale image analysis. Classically, this has been accomplished through the use of a linear scale-space, which is constructed by convolution of visual input with Gaussian kernels of varying size (scale). This has been shown to be equivalent to the solution of a linear diffusion equation on an infinite domain, as the Gaussian is the Green's function of such a system (Koenderink, 1984). Recently, much work has been focused on the use of a variable conductance function resulting in anisotropic diffusion described by a nonlinear partial differential equation (PDE). The use of anisotropic diffusion with a conductance coefficient which is a decreasing function of the gradient magnitude has been shown to enhance edges, while decreasing some types of noise (Perona and Malik, 1987). Unfortunately, the solution of the anisotropic diffusion equation requires the numerical integration of a nonlinear PDE, which is a costly process when carried out on a fixed mesh such as a typical image. In this paper we show that the complex log transformation, variants of which are universally used in mammalian retino-cortical systems, allows the nonlinear diffusion equation to be integrated at exponentially enhanced rates due to the non-uniform mesh spacing inherent in the log domain. The enhanced integration rates, coupled with the intrinsic compression of the complex log transformation, yield a speed increase of between two and three orders of magnitude, providing a means of performing real-time image enhancement using anisotropic diffusion. Office of Naval Research (N00014-95-I-0409)
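
    For reference, the diffusion being accelerated is the Perona-Malik scheme: a conductance that decreases with gradient magnitude, integrated explicitly over the image. The minimal sketch below shows only that baseline on a uniform mesh; the paper's speed-up comes from integrating the same PDE on a space-variant (complex-log) mesh, which is not reproduced here, and the parameters are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Explicit Perona-Malik diffusion on a uniform grid (baseline only).
    The conductance g falls off with gradient magnitude, so strong edges
    are preserved while noise is smoothed. Parameters are illustrative."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # decreasing conductance
    for _ in range(n_iter):
        # Nearest-neighbour differences (periodic boundaries, for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # One explicit Euler step of the nonlinear diffusion PDE
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```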

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    Full text link
    With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, a large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI acquisition parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (approximately 45 seconds), and consistency across a wide range of acquisition protocols.
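
    One way to picture the augmentation described above is a loop that samples pulse-sequence parameters and pushes tissue-parameter maps through an approximate signal equation. The sketch below uses a generic spoiled-gradient-echo forward model with invented parameter ranges; the function names, ranges, and specific signal equation are assumptions for illustration, not the paper's forward models.

```python
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, alpha):
    """Generic spoiled-gradient-echo steady-state signal equation, used
    here only to illustrate a pulse-sequence forward model. pd, t1, t2s
    are per-voxel tissue maps; tr, te (seconds) and alpha (radians) are
    sampled sequence parameters."""
    e1 = np.exp(-tr / t1)
    return pd * np.sin(alpha) * (1.0 - e1) / (1.0 - np.cos(alpha) * e1) * np.exp(-te / t2s)

def synthesize_training_pair(pd, t1, t2s, labels, rng):
    """Draw random sequence parameters and render one augmented image.
    The parameter ranges below are invented for illustration."""
    tr = rng.uniform(0.02, 2.5)             # repetition time, seconds
    te = rng.uniform(0.002, 0.08)           # echo time, seconds
    alpha = np.deg2rad(rng.uniform(5, 90))  # flip angle
    img = spgr_signal(pd, t1, t2s, tr, te, alpha)
    return img, labels                      # labels are contrast-invariant
```

    In use, one such pair would be drawn per training iteration (e.g. with rng = np.random.default_rng()), so the network sees a new synthetic contrast at every step.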

    Insight into the fundamental trade-offs of diffusion MRI from polarization-sensitive optical coherence tomography in ex vivo human brain

    Get PDF
    In the first study comparing high angular resolution diffusion MRI (dMRI) in the human brain to axonal orientation measurements from polarization-sensitive optical coherence tomography (PSOCT), we compare the accuracy of orientation estimates from various dMRI sampling schemes and reconstruction methods. We find that, if the reconstruction approach is chosen carefully, single-shell dMRI data can yield the same accuracy as multi-shell data, and only moderately lower accuracy than a full Cartesian-grid sampling scheme. Our results suggest that current dMRI reconstruction approaches do not benefit substantially from ultra-high b-values or from very large numbers of diffusion-encoding directions. We also show that accuracy remains stable across dMRI voxel sizes of 1 mm or smaller but degrades at 2 mm, particularly in areas of complex white-matter architecture. We further show that, as the spatial resolution is reduced, axonal configurations in a dMRI voxel can no longer be modeled as a small set of distinct axon populations, violating an assumption that is sometimes made by dMRI reconstruction techniques. Our findings have implications for in vivo studies and illustrate the value of PSOCT as a source of ground-truth measurements of white-matter organization that does not suffer from the distortions typical of histological techniques.
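
    Accuracy comparisons like the one above reduce, per voxel, to the angle between a dMRI orientation estimate and the corresponding PSOCT axis. A minimal sketch of that error metric (treating orientations as sign-ambiguous axes) follows; the function name and the convention of reporting degrees are assumptions.

```python
import numpy as np

def angular_error_deg(v_dmri, v_psoct):
    """Angle (degrees) between two axonal orientation estimates.
    Orientations are axes without a preferred sign, so the absolute
    value of the dot product is taken before the arccos."""
    a = np.array(v_dmri, dtype=float)
    b = np.array(v_psoct, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(abs(a @ b), 0.0, 1.0)))
```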

    Colocalization of neurons in optical coherence microscopy and Nissl-stained histology in Brodmann’s area 32 and area 21

    Full text link
    Published in final edited form as: Brain Struct Funct. 2019 January; 224(1): 351–362. doi:10.1007/s00429-018-1777-z. Optical coherence tomography is an optical technique that uses backscattered light to highlight intrinsic structure, and when applied to brain tissue, it can resolve cortical layers and fiber bundles. Optical coherence microscopy (OCM) provides higher resolution (1.25 µm) and is capable of detecting neurons. In a previous report, we compared the correspondence of OCM-acquired imaging of neurons with traditional Nissl-stained histology in entorhinal cortex layer II. In the current method-oriented study, we aimed to determine the colocalization success rate between OCM and Nissl in other brain cortical areas with different laminar arrangements and cell packing density. We focused on two additional cortical areas: medial prefrontal, pre-genual Brodmann area (BA) 32 and lateral temporal BA 21. We present the data as colocalization matrices and as quantitative percentages. The overall average colocalization in OCM compared to Nissl was 67% for BA 32 (47% for Nissl colocalization) and 60% for BA 21 (52% for Nissl colocalization), but with large variability across cases and layers. One source of variability and confounds could be ascribed to an obscuring effect from large and dense intracortical fiber bundles. Other technical challenges, including obstacles inherent to human brain tissue, are discussed. Despite these limitations, OCM is a promising semi-high-throughput tool for demonstrating detail at the neuronal level and, with further development, has distinct potential for the automatic acquisition of the large databases required for the human brain.
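
    The quantitative percentages mentioned above amount to asking, for each OCM-detected neuron, whether a Nissl-detected neuron lies within some matching tolerance. The sketch below illustrates that computation with a nearest-neighbour search; the matching radius and function name are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def colocalization_pct(ocm_xy, nissl_xy, radius_um=10.0):
    """Percentage of OCM-detected neurons with a Nissl-detected neuron
    within radius_um micrometres. Inputs are (N, 2) coordinate arrays;
    the 10 um tolerance is an illustrative default, not the study's."""
    dists, _ = cKDTree(nissl_xy).query(ocm_xy, k=1)
    return 100.0 * np.mean(dists <= radius_um)
```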

    BrainPrint: A discriminative characterization of brain morphology

    Get PDF
    We introduce BrainPrint, a compact and discriminative representation of brain morphology. BrainPrint captures shape information of an ensemble of cortical and subcortical structures by solving the eigenvalue problem of the 2D and 3D Laplace–Beltrami operator on triangular (boundary) and tetrahedral (volumetric) meshes. This discriminative characterization enables new ways to study the similarity between brains; the focus can either be on a specific brain structure of interest or on the overall brain similarity. We highlight four applications for BrainPrint in this article: (i) subject identification, (ii) age and sex prediction, (iii) brain asymmetry analysis, and (iv) potential genetic influences on brain morphology. The properties of BrainPrint require the derivation of new algorithms to account for the heterogeneous mix of brain structures with varying discriminative power. We conduct experiments on three datasets, including over 3000 MRI scans from the ADNI database, 436 MRI scans from the OASIS dataset, and 236 MRI scans from the VETSA twin study. All processing steps for obtaining the compact representation are fully automated, making this processing framework particularly attractive for handling large datasets. Funding: National Cancer Institute (U.S.) (1K25-CA181632-01); Athinoula A. Martinos Center for Biomedical Imaging (P41-RR014075); Athinoula A. Martinos Center for Biomedical Imaging (P41-EB015896); National Alliance for Medical Image Computing (U.S.) (U54-EB005149); Neuroimaging Analysis Center (U.S.) (P41-EB015902); National Center for Research Resources (U.S.) (U24 RR021382); National Institute of Biomedical Imaging and Bioengineering (U.S.) (5P41EB015896-15); National Institute of Biomedical Imaging and Bioengineering (U.S.) (R01EB006758); National Institute on Aging (AG022381); National Institute on Aging (5R01AG008122-22); National Institute on Aging (AG018344); National Institute on Aging (AG018386); National Center for Complementary and Alternative Medicine (U.S.) (RC1 AT005728-01); National Institute of Neurological Diseases and Stroke (U.S.) (R01 NS052585-01); National Institute of Neurological Diseases and Stroke (U.S.) (1R21NS072652-01); National Institute of Neurological Diseases and Stroke (U.S.) (1R01NS070963); National Institute of Neurological Diseases and Stroke (U.S.) (R01NS083534); National Institutes of Health (U.S.) (5U01-MH093765)
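
    The shape descriptor described above is built from the Laplace–Beltrami spectrum of each structure's mesh. The sketch below computes a ShapeDNA-style spectrum for a triangle (boundary) mesh using cotangent weights and a lumped mass matrix; it omits the tetrahedral (volumetric) case and any normalization the authors apply, and the helper name is an assumption.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplace_beltrami_spectrum(verts, faces, k=50):
    """ShapeDNA-style descriptor sketch: first k eigenvalues of the
    Laplace-Beltrami operator on a triangle mesh, using cotangent weights
    and a barycentric lumped mass matrix. verts: (V, 3), faces: (F, 3)."""
    V = len(verts)
    rows, cols, vals = [], [], []
    mass = np.zeros(V)
    for tri in faces:
        p = verts[tri]
        for c in range(3):                       # angle at local vertex c
            i, j = tri[(c + 1) % 3], tri[(c + 2) % 3]
            u, v = p[(c + 1) % 3] - p[c], p[(c + 2) % 3] - p[c]
            cot = np.dot(u, v) / (np.linalg.norm(np.cross(u, v)) + 1e-12)
            rows += [i, j]; cols += [j, i]; vals += [0.5 * cot] * 2
        area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        mass[tri] += area / 3.0                  # lumped mass per vertex
    W = sp.coo_matrix((vals, (rows, cols)), shape=(V, V)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # stiffness matrix
    M = sp.diags(mass)                                    # mass matrix
    # Smallest k eigenvalues of L x = lambda M x (shift-invert near zero)
    lam = eigsh(L, k=k, M=M, sigma=-1e-8, which="LM", return_eigenvectors=False)
    return np.sort(lam)
```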

    Multi-Head Graph Convolutional Network for Structural Connectome Classification

    Full text link
    We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain connectivity graph as input and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, capturing representations from the input data thoroughly. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies with sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model achieves the highest performance among the machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning approaches. We provide a detailed analysis of each component of our model.
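
    A minimal way to realize the parallel-head idea described above is two GCN branches over the same connectivity matrix, one driven by node-level features and one by the connectivity rows themselves, concatenated before classification. The sketch below is an illustrative PyTorch layout under those assumptions; the layer sizes, feature choices, and pooling are not taken from the paper.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: symmetrically normalized adjacency
    applied to node features, followed by a linear map (Kipf & Welling
    style propagation rule)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(-1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(-2)
        return torch.relu(self.lin(adj_norm @ x))

class MultiHeadConnectomeGCN(nn.Module):
    """Two parallel GCN heads over a connectome, concatenated for
    classification. Head 1 sees node strength features; head 2 sees the
    rows of the connectivity matrix as node features (an "edge" view).
    Sizes are illustrative, not taken from the paper."""
    def __init__(self, n_nodes, hidden=64, n_classes=2):
        super().__init__()
        self.head_node = GCNLayer(1, hidden)
        self.head_edge = GCNLayer(n_nodes, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, adj):                       # adj: (batch, N, N)
        strength = adj.sum(-1, keepdim=True)      # node strength feature
        h1 = self.head_node(strength, adj).mean(1)   # mean-pool over nodes
        h2 = self.head_edge(adj, adj).mean(1)        # connectivity rows as features
        return self.classifier(torch.cat([h1, h2], dim=-1))
```

    As a usage example, MultiHeadConnectomeGCN(n_nodes=90)(torch.rand(8, 90, 90)) returns one pair of class logits per subject in the batch.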